We pose a new problem: can an agent learn to combine actions from previous tasks to complete new tasks, just as humans do? In contrast to imitation learning, there is no expert data, only data collected through exploration of the environment. Compared with offline reinforcement learning, the problem of data distribution shift is more severe: the action sequence that solves a new task may be a combination of trajectory segments from multiple training tasks; in other words, neither the test task nor its solution policy exists directly in the training data, which makes the problem considerably harder. We propose a Memory-related Multi-task Method (M3) to address this problem. The method consists of three stages. First, task-agnostic exploration is carried out to collect data; unlike previous methods, we organize the exploration data into a knowledge graph. Based on the exploration data, we design a model to extract action-effect features and store them in memory, while an action prediction model is trained. Second, for a new task, the action-effect features stored in memory are used to generate candidate actions through a feature-decomposition-based approach. Finally, a multi-scale candidate action pool and the action prediction model are fused to generate the policy that completes the task. Experimental results show that the proposed method achieves significant performance improvements over the baselines.
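The fusion stage can be pictured with a small sketch. The Python snippet below is our own illustration, not the authors' code: candidate actions retrieved from memory are scored by how well their stored effect features match the effect required by the new task, and those scores are mixed with the action prediction model's probabilities. The function names, the cosine-similarity scoring, and the mixing rule are all assumptions.

```python
# Minimal sketch of fusing a memory-based candidate action pool with an
# action prediction model (illustrative assumptions, not the paper's code).
import numpy as np

def fuse_policy(effect_memory, desired_effect, predictor_probs, alpha=0.5):
    """effect_memory: (num_actions, d) stored action-effect features;
    desired_effect: (d,) effect needed by the new task;
    predictor_probs: (num_actions,) probabilities from the action prediction model."""
    # cosine similarity between each memorised effect and the desired effect
    sim = effect_memory @ desired_effect
    sim /= (np.linalg.norm(effect_memory, axis=1) * np.linalg.norm(desired_effect) + 1e-8)
    memory_scores = np.exp(sim) / np.exp(sim).sum()           # softmax over candidates
    fused = alpha * memory_scores + (1 - alpha) * predictor_probs
    return int(np.argmax(fused))                              # action to execute

rng = np.random.default_rng(0)
memory = rng.normal(size=(10, 16))        # 10 candidate actions, 16-d effect features
goal = rng.normal(size=16)
probs = np.full(10, 0.1)
print(fuse_policy(memory, goal, probs))
```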
By combining certain optimization solvers with deep neural networks, deep unfolding networks (DUNs) have attracted much attention for image compressed sensing (CS) in recent years. However, several issues remain in existing DUNs: 1) in each iteration, a plain stacked convolutional network is usually adopted, which clearly limits the expressiveness of these models; 2) once training is completed, the hyperparameters of most existing DUNs are fixed for any input content, which greatly weakens their adaptability. In this paper, by unfolding the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA), a novel fast hierarchical DUN, dubbed FHDUN, is proposed for image compressed sensing, in which a well-designed hierarchical framework is developed to cooperatively explore rich contextual information in multi-scale spaces. To further enhance adaptability, a series of hyperparameter-generation networks is developed in our framework to dynamically produce the corresponding optimal hyperparameters according to the input content. Furthermore, thanks to the acceleration policy of FISTA, the newly embedded acceleration modules enable the proposed FHDUN to save more than 50% of the iterative loops compared with recent DUNs. Extensive CS experiments show that the proposed FHDUN outperforms existing state-of-the-art CS methods while requiring fewer iterations.
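To make the unfolded structure concrete, the PyTorch sketch below shows one FISTA phase as we read it: a gradient step on the data-fidelity term, a small learned proximal network, and the momentum update that the acceleration modules build on. The network that predicts the step size from the input is our illustrative stand-in for the paper's hyperparameter-generation networks, and all module sizes are assumptions.

```python
# Minimal sketch of one unrolled FISTA phase with a learned proximal operator
# and a content-adaptive step size (assumptions, not the paper's implementation).
import torch
import torch.nn as nn

class FISTAPhase(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.prox = nn.Sequential(                     # learned proximal / denoising step
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1))
        self.rho_net = nn.Sequential(                  # content-adaptive step size
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(1, 8), nn.ReLU(inplace=True), nn.Linear(8, 1), nn.Softplus())

    def forward(self, z, x_prev, t_prev, y, Phi):
        rho = self.rho_net(z).view(-1, 1, 1, 1)
        grad = Phi.T @ (Phi @ z.flatten(1).T - y)      # gradient of 0.5*||Phi x - y||^2
        r = z - rho * grad.T.view_as(z)
        x = self.prox(r)                               # proximal mapping
        t = (1 + (1 + 4 * t_prev ** 2) ** 0.5) / 2     # FISTA momentum coefficient
        z_next = x + ((t_prev - 1) / t) * (x - x_prev)
        return z_next, x, t

phase = FISTAPhase()
x0 = torch.zeros(2, 1, 32, 32)                         # initial reconstruction
Phi = torch.randn(256, 32 * 32) / 32                   # random sampling matrix
y = Phi @ torch.rand(32 * 32, 2)                       # simulated measurements
z, x, t = phase(x0, x0, 1.0, y, Phi)
print(x.shape)                                         # torch.Size([2, 1, 32, 32])
```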
The guided filter is a fundamental tool in computer vision and computer graphics, which aims to transfer structural information from a guidance image to a target image. Most existing methods construct filter kernels from the guidance itself, without considering the mutual dependency between the guidance and the target. However, since the two images usually contain significantly different edges, simply transferring all the structural information of the guidance to the target will lead to various artifacts. To cope with this problem, we propose an effective framework named deep attentional guided image filtering, whose filtering process can fully integrate the complementary information contained in both images. Specifically, we propose an attentional kernel learning module that generates dual sets of filter kernels from the guidance and the target, respectively, and then adaptively combines them by modeling the pixel-wise dependency between the two images. Meanwhile, we propose a multi-scale guided image filtering module that progressively produces the filtering result with the constructed kernels in a coarse-to-fine manner. Correspondingly, a multi-scale fusion strategy is introduced to reuse the intermediate results during the coarse-to-fine process. Extensive experiments show that the proposed framework compares favorably with state-of-the-art methods in a wide range of guided image filtering applications, such as guided super-resolution, cross-modality restoration, texture removal, and semantic segmentation.
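The dual-kernel idea can be sketched as follows, under our own assumptions rather than the paper's exact design: one set of per-pixel 3x3 kernels is predicted from the guidance, another from the target, and a pixel-wise attention map decides how to mix them before filtering. The module names and sizes are illustrative.

```python
# Minimal sketch of combining guidance-derived and target-derived per-pixel
# kernels with a pixel-wise attention map (illustrative assumptions only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualKernelFilter(nn.Module):
    def __init__(self, k=3):
        super().__init__()
        self.k = k
        self.kernel_from_guidance = nn.Conv2d(1, k * k, 3, padding=1)
        self.kernel_from_target = nn.Conv2d(1, k * k, 3, padding=1)
        self.attention = nn.Sequential(nn.Conv2d(2, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, target, guidance):
        b, _, h, w = target.shape
        kg = self.kernel_from_guidance(guidance)            # (b, k*k, h, w)
        kt = self.kernel_from_target(target)
        a = self.attention(torch.cat([target, guidance], dim=1))
        kernels = a * kg + (1 - a) * kt                     # pixel-wise combination
        kernels = torch.softmax(kernels, dim=1)             # normalise each local kernel
        patches = F.unfold(target, self.k, padding=self.k // 2)  # (b, k*k, h*w)
        patches = patches.view(b, self.k * self.k, h, w)
        return (kernels * patches).sum(dim=1, keepdim=True)      # filtered target

f = DualKernelFilter()
out = f(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64))
print(out.shape)   # torch.Size([1, 1, 64, 64])
```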
Deep network-based image compressed sensing (CS) has attracted much attention in recent years. However, existing deep network-based CS schemes either reconstruct the target image block by block, which leads to severe blocking artifacts, or train the deep network as a black box, which makes only limited use of image prior knowledge. This paper proposes a novel image CS framework using non-local neural networks (NL-CSNet), which exploits non-local self-similarity priors with deep networks to improve reconstruction quality. In the proposed NL-CSNet, two non-local subnetworks are constructed to exploit the non-local self-similarity priors in the measurement domain and the multi-scale feature domain, respectively. Specifically, in the subnetwork of the measurement domain, long-distance dependencies between the measurements of different image blocks are established for better initial reconstruction. Similarly, in the subnetwork of the multi-scale feature domain, the affinities between dense feature representations are explored in the multi-scale space of the deep reconstruction. Furthermore, a novel loss function is developed to enhance the coupling between the non-local representations, which also enables end-to-end training of NL-CSNet. Extensive experiments show that NL-CSNet outperforms existing state-of-the-art CS methods while maintaining fast computational speed.
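A minimal sketch of the measurement-domain idea, with assumed dimensions, follows: pairwise affinities between the measurement vectors of different blocks are computed and used to aggregate information across blocks, in the style of the standard embedded-Gaussian non-local operation. Layer names and sizes are our assumptions.

```python
# Minimal sketch of a non-local operation over block-wise measurements
# (standard embedded-Gaussian form; dimensions are illustrative).
import torch
import torch.nn as nn

class NonLocalMeasurement(nn.Module):
    def __init__(self, m_dim=256, embed_dim=64):
        super().__init__()
        self.theta = nn.Linear(m_dim, embed_dim)   # query embedding
        self.phi = nn.Linear(m_dim, embed_dim)     # key embedding
        self.g = nn.Linear(m_dim, m_dim)           # value transform

    def forward(self, y):
        # y: (batch, num_blocks, m_dim) measurements of all image blocks
        affinity = torch.softmax(self.theta(y) @ self.phi(y).transpose(1, 2)
                                 / self.theta(y).shape[-1] ** 0.5, dim=-1)
        return y + affinity @ self.g(y)            # residual non-local aggregation

block = NonLocalMeasurement()
y = torch.randn(2, 64, 256)                        # 64 blocks, 256 measurements each
print(block(y).shape)                              # torch.Size([2, 64, 256])
```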
A depth map records the distance between the viewpoint and the objects in a scene, which plays a critical role in many real-world applications. However, depth maps captured by consumer-grade RGB-D cameras suffer from low spatial resolution. Guided depth map super-resolution (DSR) is a popular approach to this problem, which attempts to restore a high-resolution (HR) depth map from the input low-resolution (LR) depth map and its coupled HR RGB image that serves as guidance. The most challenging issues in guided DSR are how to correctly select consistent structures and propagate them, and how to properly handle inconsistent ones. In this paper, we propose a novel attention-based hierarchical multi-modal fusion (AHMF) network for guided DSR. Specifically, to effectively extract and combine relevant information from the LR depth and the HR guidance, we propose a multi-modal attention-based fusion (MMAF) strategy for hierarchical convolutional layers, which includes a feature enhancement block to select valuable features and a feature recalibration block to unify the similarity metrics of modalities with different appearance characteristics. Furthermore, we propose a bi-directional hierarchical feature collaboration (BHFC) module to fully leverage the low-level spatial information and high-level structural information among multi-scale features. Experimental results show that our method outperforms state-of-the-art methods in terms of reconstruction accuracy, running speed, and memory efficiency.
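One way to picture the fusion step is the sketch below, written under our own assumptions: channel attention selects useful features within each modality, and a spatial gate computed from both modalities decides how much guidance information is injected into the depth branch. The block names are illustrative and do not reproduce the paper's exact MMAF design.

```python
# Minimal sketch of a multi-modal attention fusion step for guided depth SR
# (illustrative block names and sizes, not the paper's architecture).
import torch
import torch.nn as nn

class MMAFusion(nn.Module):
    def __init__(self, c=32):
        super().__init__()
        self.enhance_depth = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                           nn.Conv2d(c, c, 1), nn.Sigmoid())
        self.enhance_rgb = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                         nn.Conv2d(c, c, 1), nn.Sigmoid())
        self.spatial_gate = nn.Sequential(nn.Conv2d(2 * c, 1, 3, padding=1),
                                          nn.Sigmoid())

    def forward(self, f_depth, f_rgb):
        f_depth = f_depth * self.enhance_depth(f_depth)   # select valuable depth features
        f_rgb = f_rgb * self.enhance_rgb(f_rgb)           # recalibrate guidance features
        gate = self.spatial_gate(torch.cat([f_depth, f_rgb], dim=1))
        return f_depth + gate * f_rgb                     # inject guidance where it is consistent

fuse = MMAFusion()
out = fuse(torch.rand(1, 32, 64, 64), torch.rand(1, 32, 64, 64))
print(out.shape)   # torch.Size([1, 32, 64, 64])
```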
Compressed videos often exhibit visually annoying artifacts, known as Perceivable Encoding Artifacts (PEAs), which dramatically degrade video visual quality. Subjective and objective measures capable of identifying and quantifying various types of PEAs are critical in improving visual quality. In this paper, we investigate the influence of four spatial PEAs (i.e. blurring, blocking, bleeding, and ringing) and two temporal PEAs (i.e. flickering and floating) on video quality. For spatial artifacts, we propose a visual saliency model with a low computational cost and higher consistency with human visual perception. In terms of temporal artifacts, self-attention based TimeSFormer is improved to detect temporal artifacts. Based on the six types of PEAs, a quality metric called Saliency-Aware Spatio-Temporal Artifacts Measurement (SSTAM) is proposed. Experimental results demonstrate that the proposed method outperforms state-of-the-art metrics. We believe that SSTAM will be beneficial for optimizing video coding techniques.
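The pooling behind a saliency-aware metric can be illustrated as follows; the snippet is purely our own toy example, and the weighting scheme is an assumption rather than SSTAM's exact formula: per-frame spatial artifact maps are weighted by a visual saliency map before averaging, and the result is combined with temporal artifact scores.

```python
# Minimal sketch of saliency-weighted pooling of artifact maps
# (toy inputs; the combination rule is an assumption, not SSTAM's formula).
import numpy as np

def saliency_weighted_score(spatial_artifact_maps, saliency_maps, temporal_scores, w=0.5):
    """spatial_artifact_maps, saliency_maps: (frames, H, W); temporal_scores: (frames,)."""
    sal = saliency_maps / (saliency_maps.sum(axis=(1, 2), keepdims=True) + 1e-8)
    spatial = (spatial_artifact_maps * sal).sum(axis=(1, 2)).mean()  # saliency-weighted pooling
    temporal = temporal_scores.mean()
    return w * spatial + (1 - w) * temporal

rng = np.random.default_rng(0)
print(saliency_weighted_score(rng.random((8, 36, 64)), rng.random((8, 36, 64)), rng.random(8)))
```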
As one of the most important psychological stress reactions, micro-expressions (MEs) are spontaneous and transient facial expressions that can reveal the genuine emotions of human beings. Thus, automatically recognizing MEs (MER) is becoming increasingly crucial in the field of affective computing, providing essential technical support for lie detection, psychological analysis, and other areas. However, the lack of abundant ME data seriously restricts the development of cutting-edge data-driven MER models. Although several spontaneous ME datasets have recently been released to alleviate this problem, the amount of available data remains small. To address the problem of ME data hunger, we construct a dynamic spontaneous ME dataset with the largest current ME data scale, called DFME (Dynamic Facial Micro-expressions), which includes 7,526 well-labeled ME videos induced from 671 participants and annotated by more than 20 annotators over three years. Afterwards, we adopt four classical spatiotemporal feature learning models on DFME to perform MER experiments and objectively verify the validity of the DFME dataset. In addition, we explore different solutions to the class imbalance and key-frame sequence sampling problems in dynamic MER on DFME, so as to provide a valuable reference for future research. The comprehensive experimental results show that our DFME dataset can facilitate research on automatic MER and provides a new benchmark for MER. DFME will be published via https://mea-lab-421.github.io.
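For readers unfamiliar with the class-imbalance issue mentioned above, the sketch below shows two common remedies that such experiments typically compare; it is our own illustration of standard practice, not the paper's protocol: inverse-frequency class weights in the loss, and a weighted sampler that rebalances mini-batches.

```python
# Minimal sketch of two common class-imbalance remedies (illustrative only).
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader, WeightedRandomSampler

labels = torch.tensor([0] * 50 + [1] * 10 + [2] * 5)         # imbalanced toy labels
features = torch.randn(len(labels), 16)

counts = torch.bincount(labels).float()
class_weights = counts.sum() / (len(counts) * counts)        # inverse-frequency weights
criterion = nn.CrossEntropyLoss(weight=class_weights)        # class-weighted loss

sample_weights = class_weights[labels]                       # per-sample weights
sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels), replacement=True)
loader = DataLoader(TensorDataset(features, labels), batch_size=8, sampler=sampler)

x, y = next(iter(loader))
print(criterion(torch.randn(len(y), 3), y))                  # loss on random logits
```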
Face Anti-spoofing (FAS) is essential to secure face recognition systems from various physical attacks. However, recent research generally focuses on short-distance applications (e.g., phone unlocking) while paying little attention to long-distance scenes (e.g., surveillance security checks). In order to promote relevant research and fill this gap in the community, we collect a large-scale Surveillance High-Fidelity Mask (SuHiFiMask) dataset captured under 40 surveillance scenes, which has 101 subjects from different age groups with 232 3D attacks (high-fidelity masks), 200 2D attacks (posters, portraits, and screens), and 2 adversarial attacks. In this scenario, low image resolution and noise interference are the new challenges faced in surveillance FAS. Together with the SuHiFiMask dataset, we propose a Contrastive Quality-Invariance Learning (CQIL) network to alleviate the performance degradation caused by image quality from three aspects: (1) an Image Quality Variable module (IQV) is introduced to recover image information associated with discrimination by incorporating a super-resolution network; (2) generated sample pairs are used to simulate quality variance distributions, helping the contrastive learning strategy obtain robust feature representations under quality variation; (3) a Separate Quality Network (SQN) is designed to learn discriminative features independent of image quality. Finally, extensive experiments verify the quality of the SuHiFiMask dataset and the superiority of the proposed CQIL.
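The contrastive idea described in (2) can be sketched as follows, under our own assumptions and not as the paper's exact loss: a face image and its quality-degraded counterpart form a positive pair, all other images in the batch act as negatives, and an InfoNCE-style loss pushes the representation to become invariant to image quality.

```python
# Minimal sketch of a quality-invariant contrastive (InfoNCE-style) loss
# between clean and quality-degraded embeddings (illustrative assumption).
import torch
import torch.nn.functional as F

def quality_invariant_contrastive_loss(feat_clean, feat_degraded, temperature=0.1):
    """feat_clean, feat_degraded: (batch, dim) embeddings of the same faces at two qualities."""
    z1 = F.normalize(feat_clean, dim=1)
    z2 = F.normalize(feat_degraded, dim=1)
    logits = z1 @ z2.T / temperature                 # pairwise similarities
    targets = torch.arange(z1.shape[0])              # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

loss = quality_invariant_contrastive_loss(torch.randn(16, 128), torch.randn(16, 128))
print(loss.item())
```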
The interview is regarded as one of the most crucial steps in recruitment. To fully prepare for interviews with recruiters, job seekers usually practice with mock interviews among themselves. However, such mock interviews with peers are generally far from the real interview experience: the mock interviewers are not guaranteed to be professional and are not likely to behave like a real interviewer. Due to the rapid growth of online recruitment in recent years, recruiters tend to conduct online interviews, which makes it possible to collect real interview data from real interviewers. In this paper, we propose a novel application named EZInterviewer, which aims to learn from online interview data and provide mock interview services to job seekers. The task is challenging in two ways: (1) the interview data are now available but still low-resource; (2) generating meaningful and relevant interview dialogs requires a thorough understanding of both resumes and job descriptions. To address the low-resource challenge, EZInterviewer is trained on a very small set of interview dialogs. The key idea is to reduce the number of parameters that rely on interview dialogs by disentangling the knowledge selector and the dialog generator, so that most parameters can be trained with ungrounded dialogs as well as the resume data, which are not low-resource. Evaluation results on a real-world job interview dialog dataset indicate that we achieve promising results in generating mock interviews. With the help of EZInterviewer, we hope to make mock interview practice easier for job seekers.
Nowadays, time-stamped web documents related to a general news query flood the Internet, and timeline summarization aims to concisely summarize the evolution trajectory of events along the timeline. Unlike traditional document summarization, timeline summarization needs to model the time-series information of the input events and summarize important events in chronological order. To tackle this challenge, in this paper, we propose a Unified Timeline Summarizer (UTS) that can generate abstractive and extractive timeline summaries in time order. Concretely, in the encoder part, we propose a graph-based event encoder that relates multiple events according to their content dependency and learns a global representation of each event. In the decoder part, to ensure the chronological order of the abstractive summary, we propose to extract the feature of event-level attention in its generation process, with the sequential information retained, and use it to simulate the evolutionary attention of the ground-truth summary. The event-level attention can also be used to assist extractive summarization, where the extracted summary likewise follows the time sequence. We augment the previous Chinese large-scale timeline summarization dataset and collect a new English timeline dataset. Extensive experiments conducted on these datasets and on the out-of-domain Timeline 17 dataset show that UTS achieves state-of-the-art performance in terms of both automatic and human evaluations.
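One way to make the event-level attention idea concrete is the sketch below, which is entirely our own illustration: token-level decoder attention is pooled into per-event weights, and a KL term encourages those weights to follow the event that the ground-truth summary covers at each generation step. The target construction and the loss form are assumptions, not the paper's training objective.

```python
# Minimal sketch of supervising pooled event-level attention toward a
# chronological target (illustrative assumption, not the UTS objective).
import torch
import torch.nn.functional as F

def chronological_attention_loss(token_attn, event_ids, target_event):
    """token_attn: (tokens,) decoder attention at one generation step;
    event_ids: (tokens,) which event each input token belongs to;
    target_event: index of the event the ground-truth summary covers at this step."""
    num_events = int(event_ids.max()) + 1
    event_attn = torch.zeros(num_events).scatter_add(0, event_ids, token_attn)  # pool per event
    target = F.one_hot(torch.tensor(target_event), num_events).float()
    return F.kl_div(torch.log(event_attn + 1e-8), target, reduction="sum")

attn = torch.softmax(torch.randn(20), dim=0)        # toy attention over 20 tokens
events = torch.randint(0, 4, (20,))                 # 4 events
print(chronological_attention_loss(attn, events, target_event=1))
```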